Results 1 - 3 of 3

1.
Analyst; 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38619825

ABSTRACT

Radiation-induced lung injury (RILI) is a dose-limiting toxicity for cancer patients receiving thoracic radiotherapy. As such, it is important to characterize metabolic associations with the early and late stages of RILI, namely pneumonitis and pulmonary fibrosis. Recently, Raman spectroscopy has shown utility for differentiating pneumonitic and fibrotic tissue states in a mouse model; however, the specific metabolite-disease associations remain relatively unexplored from a Raman perspective. This work harnesses Raman spectroscopy and supervised machine learning to investigate metabolic associations with radiation pneumonitis and pulmonary fibrosis in a mouse model. To this end, Raman spectra were collected from lung tissues of irradiated and non-irradiated C3H/HeJ and C57BL/6J mice and labelled as normal, pneumonitis, or fibrosis based on histological assessment. Spectra were decomposed into metabolic scores via group- and basis-restricted non-negative matrix factorization and classified with a random forest (GBR-NMF-RF), and metabolites predictive of RILI were identified. To provide comparative context, spectra were decomposed and classified via principal component analysis with a random forest (PCA-RF), and full spectra were classified with a convolutional neural network (CNN) and with logistic regression (LR). Under leave-one-mouse-out cross-validation, GBR-NMF-RF was comparable to the other methods in accuracy and log-loss (p > 0.10 by Mann-Whitney U test), and no methodology was dominant across all classification tasks by area under the receiver operating characteristic curve. Moreover, GBR-NMF-RF results were directly interpretable: collagen and specific collagen precursors were the top fibrosis predictors, while metabolites with immune and inflammatory functions, such as serine and histidine, were the top pneumonitis predictors. CNN interpretation heatmaps lent further support to GBR-NMF-RF and the identified metabolite associations with RILI by revealing spectral regions consistent with these metabolites.
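Below is a minimal sketch of the decompose-then-classify pattern described above, assuming Python with scikit-learn. Plain (unconstrained) NMF stands in for the group- and basis-restricted variant used in the study, and the spectra, labels, component count, and cross-validation scheme are synthetic placeholders.

    # Decompose spectra into component scores, then classify the scores.
    # Sketch only: standard sklearn NMF replaces the constrained GBR-NMF,
    # and random spectra/labels replace the mouse lung-tissue data.
    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((120, 900))        # 120 spectra x 900 wavenumber bins (non-negative)
    y = rng.integers(0, 3, size=120)  # 0 = normal, 1 = pneumonitis, 2 = fibrosis

    # W holds per-spectrum scores over the learned basis spectra H; in the
    # published pipeline the basis is constrained to reference metabolite spectra.
    nmf = NMF(n_components=20, init="nndsvda", max_iter=500, random_state=0)
    W = nmf.fit_transform(X)

    # Random forest on the scores; feature_importances_ would then rank the
    # components (metabolites, when the basis is constrained) by predictive value.
    # The study used leave-one-mouse-out cross-validation; plain 5-fold is used here.
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    print(cross_val_score(rf, W, y, cv=5).mean())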

2.
Analyst; 149(5): 1645-1657, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38312026

ABSTRACT

Reprogramming of cellular metabolism is a driving factor of tumour progression and radiation therapy resistance. Identifying biochemical signatures associated with tumour radioresistance may assist with the development of targeted treatment strategies to improve clinical outcomes. Raman spectroscopy (RS) can monitor post-irradiation biomolecular changes and signatures of radiation response in tumour cells in a label-free manner. Convolutional neural networks (CNNs) perform feature extraction directly from data in an end-to-end manner, with high classification performance. Furthermore, recently developed CNN explainability techniques help visualize the critical discriminative features captured by the model. In this work, a CNN is developed to characterize tumour response to radiotherapy based on the degree of radioresistance. The model was trained to classify Raman spectra of three human tumour cell lines as radiosensitive (LNCaP) or radioresistant (MCF7, H460) over a range of treatment doses and data collection time points. Additionally, a method based on Gradient-Weighted Class Activation Mapping (Grad-CAM) was used to determine response-specific salient Raman peaks influencing the CNN predictions. The CNN effectively classified the cell spectra, with accuracy, sensitivity, specificity, and F1 score exceeding 99.8%. Grad-CAM heatmaps of H460 and MCF7 (radioresistant) cell spectra exhibited high contributions from Raman bands tentatively assigned to glycogen, amino acids, and nucleic acids. Conversely, heatmaps of LNCaP (radiosensitive) cells revealed activations at lipid and phospholipid bands. Finally, Grad-CAM variable importance scores were derived for glycogen, asparagine, and phosphatidylcholine, and their trends over cell line, dose, and acquisition time agreed with previously established models. Thus, the CNN can accurately detect biomolecular differences in the Raman spectra of tumour cells of varying radiosensitivity without requiring manual feature extraction. Moreover, Grad-CAM may help identify metabolic signatures associated with the observed categories, offering the potential for automated clinical tumour radiation response characterization.
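The sketch below illustrates the general pattern of a 1D CNN on Raman spectra with a Grad-CAM-style saliency map, assuming TensorFlow/Keras. The architecture, layer names, class count, and synthetic input are illustrative assumptions, not the published model; Grad-CAM here weights the last convolutional feature maps by the gradient of the class score.

    # 1D CNN over spectra plus a Grad-CAM-style map along the wavenumber axis.
    # Sketch only: the untrained toy model and random spectrum are placeholders.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    n_bins, n_classes = 1000, 2   # spectrum length; radiosensitive vs. radioresistant
    inputs = layers.Input(shape=(n_bins, 1))
    x = layers.Conv1D(16, 9, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling1D(4)(x)
    x = layers.Conv1D(32, 9, activation="relu", padding="same", name="last_conv")(x)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = Model(inputs, outputs)

    def grad_cam_1d(model, spectrum, class_idx, conv_name="last_conv"):
        """Weight last-conv feature maps by the gradient of the class score."""
        grad_model = Model(model.inputs, [model.get_layer(conv_name).output, model.output])
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(spectrum[None, :, None])
            score = preds[:, class_idx]
        grads = tape.gradient(score, conv_out)           # (1, length, channels)
        weights = tf.reduce_mean(grads, axis=1)          # per-channel importance
        cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, :], axis=-1))
        return cam.numpy()[0]                            # coarse saliency along the spectrum

    cam = grad_cam_1d(model, np.random.rand(n_bins).astype("float32"), class_idx=1)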


Subjects
Neural Networks, Computer; Spectrum Analysis, Raman; Humans; Spectrum Analysis, Raman/methods; Cell Line, Tumor; MCF-7 Cells; Glycogen/metabolism
3.
BJR Open; 5(1): 20230008, 2023.
Article in English | MEDLINE | ID: mdl-37953867

ABSTRACT

Objective: The microscopic analysis of biopsied lung nodules represents the gold standard for definitive diagnosis of lung cancer. Deep learning has achieved pathologist-level classification of non-small cell lung cancer histopathology images at high resolutions (0.5-2 µm/px), and recent studies have revealed tomography-histology relationships at lower spatial resolutions. Thus, we tested whether patterns for histological classification of lung cancer could be detected at spatial resolutions such as those offered by ultra-high-resolution CT. Methods: We investigated the performance of a deep convolutional neural network (inception-v3) in classifying lung histopathology images at lower spatial resolutions than those typical of pathology. Models were trained on 2167 histopathology slides from The Cancer Genome Atlas to differentiate between lung cancer tissues (adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC)) and normal dense tissue. Slides were accessed at 2.5× magnification (4 µm/px), and reduced resolutions of 8, 16, 32, 64, and 128 µm/px were simulated by applying digital low-pass filters. Results: The classifier achieved an area under the curve ≥0.95 for all classes at spatial resolutions of 4-16 µm/px, and an area under the curve ≥0.95 for differentiating normal tissue from the two cancer types at 128 µm/px. Conclusions: Features for tissue classification by deep learning exist at spatial resolutions below what is typically viewed by pathologists. Advances in knowledge: We demonstrated that a deep convolutional network could differentiate normal and cancerous lung tissue at spatial resolutions as coarse as 128 µm/px, and LUAD, LUSC, and normal tissue at resolutions as coarse as 16 µm/px. Our data, and the results of tomography-histology studies, indicate that these patterns should also be detectable within tomographic data at these resolutions.
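A brief sketch of the resolution-reduction step described above, assuming Python with SciPy and Keras: a Gaussian blur stands in for the digital low-pass filter, and the sigma heuristic, class count, and placeholder tile are assumptions rather than the study's exact settings.

    # Simulate a coarser pixel size with a low-pass filter, then classify the
    # tile with an Inception-v3 backbone plus a 3-class head (LUAD/LUSC/normal).
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from tensorflow.keras.applications import InceptionV3
    from tensorflow.keras import layers, Model

    def simulate_resolution(tile, native_um_px=4.0, target_um_px=16.0):
        """Blur so detail finer than the target pixel size is suppressed (heuristic sigma)."""
        sigma = 0.5 * target_um_px / native_um_px   # in native pixels; channels untouched
        return gaussian_filter(tile, sigma=(sigma, sigma, 0))

    # weights=None keeps the sketch self-contained; the study fine-tuned on TCGA tiles.
    base = InceptionV3(include_top=False, weights=None, input_shape=(299, 299, 3), pooling="avg")
    head = layers.Dense(3, activation="softmax")(base.output)
    model = Model(base.input, head)

    tile = np.random.rand(299, 299, 3).astype("float32")   # placeholder RGB tile
    probs = model.predict(simulate_resolution(tile)[None, ...])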
